8 research outputs found

    High-performance and fault-tolerant techniques for massive data distribution in online communities

    The amount of digital information produced and consumed is increasing every day. This rapid growth is driven by advances in computing power and hardware technologies and by the popularization of user-generated content networks. New hardware is able to process larger quantities of data, which makes it possible to obtain finer results and, as a consequence, more data is generated. In this respect, scientific applications have evolved to benefit from the new hardware capabilities. This type of application is characterized by requiring large amounts of information as input and by generating significant amounts of intermediate data, resulting in large files. Since this increase appears not only in the volume of data but also in the size of individual files, we need to provide efficient and reliable data access mechanisms. Producing such a mechanism is a challenging task due to the number of aspects involved. However, we can leverage the knowledge found in social networks to improve the distribution process. In this respect, the advent of Web 2.0 has popularized the concept of the social network, which provides valuable knowledge about the relationships among users and between users and data. Extracting this knowledge and defining ways to actively use it to increase the performance of a system remains an open research direction. Additionally, we must take into account other existing limitations; in particular, the interconnection between the different elements of the system is one of the key aspects. The availability of new technologies such as mass-produced multicore chips, large storage media, and better sensors has contributed to the increase in the amount of data being produced. However, the underlying interconnection technologies have not improved at the same pace. This leads to a situation where vast amounts of data can be produced and need to be consumed by a large number of geographically distributed users, but the interconnection between both ends does not meet the required needs. In this thesis, we address the problem of efficient and reliable data distribution in geographically distributed systems. We focus on providing a solution that 1) optimizes the use of existing resources, 2) does not require changes in the underlying interconnection, and 3) provides fault-tolerant capabilities. To achieve these objectives, we define a generic data distribution architecture composed of three main components: a community detection module, a transfer scheduling module, and a distribution controller. The community detection module leverages the information found in the social network formed by the users requesting files and produces a set of virtual communities grouping entities with similar interests. The transfer scheduling module produces a plan to distribute all requested files efficiently, improving resource utilization; for this purpose, we model the distribution problem using linear programming and offer a method that permits solving the problem in a distributed manner. Finally, the distribution controller manages the distribution process using the aforementioned schedule, controls the available server infrastructure, and launches new on-demand resources when necessary.
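    The transfer scheduling idea can be illustrated with a small linear program. The sketch below is illustrative only, not the thesis' actual formulation: it assigns each client's requested bytes to servers so that total transfer cost is minimized while respecting per-server upload capacity. The names demand, capacity, and cost, and the use of scipy.optimize.linprog, are assumptions made for the example.

    # Minimal LP sketch (assumed formulation, not the thesis' exact model):
    # decide which fraction of each client's requested bytes each server sends.
    import numpy as np
    from scipy.optimize import linprog

    demand = np.array([4.0, 2.0, 6.0])       # GB requested by each client
    capacity = np.array([5.0, 8.0])          # GB each server can upload in the window
    cost = np.array([[1.0, 2.0, 4.0],        # cost[s, c]: cost of serving client c from server s
                     [3.0, 1.0, 1.0]])

    n_servers, n_clients = cost.shape
    # Variable x[s, c] = fraction of client c's demand served by server s, flattened row-major.
    c_vec = (cost * demand).ravel()

    # Equality constraints: each client's demand is fully covered (fractions sum to 1).
    A_eq = np.zeros((n_clients, n_servers * n_clients))
    for c in range(n_clients):
        A_eq[c, c::n_clients] = 1.0
    b_eq = np.ones(n_clients)

    # Inequality constraints: bytes pushed by a server cannot exceed its capacity.
    A_ub = np.zeros((n_servers, n_servers * n_clients))
    for s in range(n_servers):
        A_ub[s, s * n_clients:(s + 1) * n_clients] = demand
    b_ub = capacity

    res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    plan = res.x.reshape(n_servers, n_clients)
    print(plan)  # plan[s, c]: fraction of client c's bytes to fetch from server s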

    Exploiting parallelism in a X-ray tomography reconstruction algorithm on hybrid multi-GPU and multi-core platforms

    Proceedings of: 2012 10th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA 2012), Leganés, Madrid, 10-13 July 2012. Most small-animal X-ray computed tomography (CT) scanners are based on cone-beam geometry with a flat-panel detector orbiting in a circular trajectory. Image reconstruction in these systems is usually performed by approximate methods based on the algorithm proposed by Feldkamp et al. Currently there is a strong need to speed up the reconstruction of X-ray CT data in order to extend its clinical applications. We present an efficient modular implementation of an FDK-based reconstruction algorithm that takes advantage of the parallel computing capabilities and the efficient bilinear interpolation provided by general-purpose graphics processing units (GPGPU). The proposed implementation of the algorithm is evaluated on a high-resolution micro-CT and achieves a speed-up of 46 while preserving the reconstructed image quality. This work has been partially funded by AMIT Project CDTI CENIT, TEC2007-64731, TEC2008-06715-C02-01, RD07/0014/2009, TRA2009 0175, RECAVA-RETIC, RD09/0077/00087 (Ministerio de Ciencia e Innovacion), ARTEMIS S2009/DPI-1802 (Comunidad de Madrid), and TIN2010-16497 (Ministerio de Ciencia e Innovacion).
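    To make the role of bilinear interpolation concrete, the sketch below shows a simplified CPU backprojection step for a circular cone-beam geometry. It is not the paper's implementation: the geometry parameters (DSO, DSD, pixel and voxel sizes) are illustrative assumptions, and the filtering and weighting stages of the FDK algorithm are omitted. On a GPU, the per-voxel projection lookup is the operation that maps to a hardware bilinear texture fetch.

    # Simplified backprojection sketch (assumed geometry, not the paper's code).
    import numpy as np

    def bilinear(proj, u, v):
        """Bilinearly interpolate projection 'proj' (nv x nu) at detector coords (u, v) in pixels."""
        u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
        u1, v1 = u0 + 1, v0 + 1
        # Clamp to the detector; out-of-range rays take the border value here.
        u0c, u1c = np.clip(u0, 0, proj.shape[1] - 1), np.clip(u1, 0, proj.shape[1] - 1)
        v0c, v1c = np.clip(v0, 0, proj.shape[0] - 1), np.clip(v1, 0, proj.shape[0] - 1)
        fu, fv = u - u0, v - v0
        top = (1 - fu) * proj[v0c, u0c] + fu * proj[v0c, u1c]
        bot = (1 - fu) * proj[v1c, u0c] + fu * proj[v1c, u1c]
        return (1 - fv) * top + fv * bot

    def backproject(projections, angles, n, DSO=300.0, DSD=600.0, dpix=0.5, dvox=0.25):
        """Accumulate an (n x n x n) volume from filtered projections (n_angles x nv x nu)."""
        vol = np.zeros((n, n, n))
        coords = (np.arange(n) - n / 2 + 0.5) * dvox        # voxel centres in mm
        x, y, z = np.meshgrid(coords, coords, coords, indexing='ij')
        nv, nu = projections.shape[1], projections.shape[2]
        for proj, beta in zip(projections, angles):
            xr = x * np.cos(beta) + y * np.sin(beta)        # rotate voxel into source frame
            yr = -x * np.sin(beta) + y * np.cos(beta)
            d = DSO - xr                                    # distance from the source plane
            u = (DSD * yr / d) / dpix + nu / 2 - 0.5        # detector coords in pixels
            v = (DSD * z / d) / dpix + nv / 2 - 0.5
            vol += (DSO / d) ** 2 * bilinear(proj, u, v)    # distance weight * interpolated sample
        return vol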

    CoSMiC: A hierarchical cloudlet-based storage architecture for mobile clouds

    Storage capacity is a constraint for current mobile devices. Mobile Cloud Computing (MCC) has been developed to augment device capabilities, allowing mobile users to store and access large datasets in the cloud through wireless networks. However, given the limitations of network bandwidth, latency, and device battery life, new solutions are needed to extend the usage of mobile devices. This paper presents a novel design and implementation of a hierarchical cloud storage system for mobile devices based on multiple I/O caching layers. The solution relies on Memcached as the cache system, preserving its strengths, such as performance, scalability, and quick and portable deployment, and aims to reduce the I/O latency of current mobile cloud solutions. The proposed solution consists of a user-level library and extended Memcached back-ends, and is hierarchical: Memcached-based I/O cache servers are deployed across the whole I/O infrastructure datapath. Our experimental results demonstrate that CoSMiC can significantly reduce the round-trip latency in the presence of low cache hit ratios compared with a 3G connection, even when using a multi-level cache hierarchy.
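    The read path of such a multi-level cache hierarchy can be sketched as follows. The example below is an illustrative assumption rather than CoSMiC's actual API: it tries each Memcached level in order of proximity, falls back to a cloud store on a global miss, and refills the closer levels on a hit. The host names and the cloud_store object are hypothetical; the client calls use the real pymemcache library.

    # Multi-level read-path sketch (assumed design, not CoSMiC's actual API).
    from pymemcache.client.base import Client

    class HierarchicalCache:
        def __init__(self, levels, cloud_store):
            # levels: list of (host, port) ordered from closest (e.g. a cloudlet on
            # the local WLAN) to farthest (data-centre front-end).
            self.levels = [Client(addr) for addr in levels]
            self.cloud = cloud_store                      # final source of truth (hypothetical object)

        def get(self, key):
            for i, cache in enumerate(self.levels):
                value = cache.get(key)
                if value is not None:
                    # Hit at level i: refill every closer level so the next
                    # request avoids the extra round trips.
                    for closer in self.levels[:i]:
                        closer.set(key, value)
                    return value
            # Miss in the whole hierarchy: fetch from the cloud and populate all levels.
            value = self.cloud.read(key)
            for cache in self.levels:
                cache.set(key, value)
            return value

    # Usage (illustrative):
    # cache = HierarchicalCache([("cloudlet.local", 11211), ("edge.example.org", 11211)], my_cloud_store)
    # data = cache.get("photos/2024/img_001.jpg")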

    Competencies in specialist nurses and advanced practice nurses

    Objective: To analyse the distribution of advanced competencies among specialist nurses and advanced practice nurses, and to evaluate their association with certain characteristics of their professional profile. Method: Analytical, multicentre, cross-sectional study. Nurses working as Advanced Practice Nurses and Specialist Nurses were included. Their level of perceived advanced competencies was measured, together with professional characterisation variables. Results: Two hundred and seventy-seven nurses participated (149 working in advanced practice and 128 as specialists), with a mean of 13.88 (11.05) years as specialists and 10.48 (5.32) years as Advanced Practice Nurses. 28.8% held a master's or doctoral degree. 50.2% worked in primary care, 24.9% in hospitals, and 22.7% in mental health. The overall self-perceived level was high across the different competencies, the lowest dimensions being research, evidence-based practice, quality and safety management, and leadership and consultancy. Advanced Practice Nurses obtained a higher competency level overall and in the dimensions of leadership and consultancy, interprofessional relationships, care management, and health promotion. There were no differences according to experience or to holding a master's or doctoral degree. Among Advanced Practice Nurses, the practice context did not influence competency levels, whereas among specialist nurses it did, in favour of those working in mental health. Conclusions: Specialist nurses and advanced practice nurses have distinct competencies that should be managed appropriately for the development of advanced and specialised nursing services.